Molecular dynamics simulations are a cornerstone of science, allowing the investigation of a system's thermodynamics and the analysis of complex molecular interactions. In general, creating extended molecular trajectories can be a computationally expensive process, for example when running ab-initio simulations. Hence, repeating such calculations to obtain either more accurate thermodynamics or a higher resolution of the dynamics generated by fine-grained quantum interactions can be costly in both time and computation. In this work, we explore different machine learning (ML) methodologies to increase, on demand, the resolution of molecular dynamics trajectories in a post-processing step. As a proof of concept, we analyse the performance of bi-directional neural networks such as neural ODEs, Hamiltonian networks, recurrent neural networks and LSTMs, as well as their uni-directional variants as a reference, on molecular dynamics simulations (here: the MD17 dataset). We find that Bi-LSTMs are the best performing models; by exploiting the local time-symmetry of thermostatted trajectories, they can even learn long-range correlations and display high robustness across molecular complexity. Our models reach accuracies of up to $10^{-4}$ in trajectory interpolation while faithfully reconstructing several full cycles of intricate high-frequency molecular vibrations, making the learned and reference trajectories indistinguishable. The results reported in this work can serve (1) as a baseline for larger systems and (2) for the construction of better MD integrators.
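The abstract does not include code; the following is a minimal, hypothetical sketch of the kind of bi-directional LSTM interpolator it describes, written in PyTorch. The hidden size, atom count, and upsampling factor are invented for illustration only.

```python
import torch
import torch.nn as nn

class BiLSTMInterpolator(nn.Module):
    """Hypothetical sketch: predict intermediate MD frames from a coarse
    trajectory by running an LSTM over the frames in both time directions."""

    def __init__(self, n_atoms: int, hidden: int = 128, upsample: int = 4):
        super().__init__()
        self.upsample = upsample          # frames to insert per coarse step
        feat = 3 * n_atoms                # flattened Cartesian coordinates
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, upsample * feat)

    def forward(self, coarse):            # coarse: (batch, T, 3 * n_atoms)
        h, _ = self.lstm(coarse)          # (batch, T, 2 * hidden)
        fine = self.head(h)               # (batch, T, upsample * 3 * n_atoms)
        b, t, _ = fine.shape
        return fine.view(b, t * self.upsample, -1)

# toy usage with random data standing in for MD17-style coordinates
model = BiLSTMInterpolator(n_atoms=9)     # e.g. ethanol has 9 atoms
coarse = torch.randn(2, 16, 27)           # (batch, coarse frames, 3 * 9)
fine = model(coarse)
print(fine.shape)                         # torch.Size([2, 64, 27])
```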
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
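As a rough illustration of how such a repository might be accessed programmatically, here is a short sketch using the companion objaverse Python package. The function names follow its public examples and the subset size is arbitrary; both are assumptions, not part of the abstract, and may differ between package versions.

```python
# Hypothetical sketch: download a handful of Objaverse models and
# inspect their annotations (captions, tags, etc.).
import random
import objaverse

uids = objaverse.load_uids()                 # identifiers for the 800K+ objects
sample = random.sample(uids, 5)              # small random subset for a demo

annotations = objaverse.load_annotations(sample)   # per-object metadata
objects = objaverse.load_objects(sample)           # uid -> local .glb path

for uid in sample:
    print(uid, annotations[uid].get("name"), objects[uid])
```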
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data & models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
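To make the notion of power law scaling concrete, here is a small, self-contained sketch that fits a power law to error-versus-compute points in log-log space. The numbers are invented for the example and are not measurements from the OpenCLIP study.

```python
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])   # toy training FLOPs
error = np.array([0.62, 0.48, 0.39, 0.31, 0.25])      # toy zero-shot error

# A pure power law E = a * C**(-b) is a straight line in log-log space,
# so an ordinary least-squares fit on the logs recovers the exponent b.
slope, intercept = np.polyfit(np.log10(compute), np.log10(error), deg=1)
b, a = -slope, 10 ** intercept
print(f"E ~ {a:.3g} * C^(-{b:.3f})")
print("extrapolated error at 1e23 FLOPs:", a * 1e23 ** (-b))
```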
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems. In this work, we propose a new paradigm for steering the behavior of neural networks, centered around task vectors. A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition, and the behavior of the resulting model is steered accordingly. Negating a task vector decreases performance on the target task, with little change in model behavior on control tasks. Moreover, adding task vectors together can improve performance on multiple tasks at once. Finally, when tasks are linked by an analogy relationship of the form "A is to B as C is to D", combining task vectors from three of the tasks can improve performance on the fourth, even when no data from the fourth task is used for training. Overall, our experiments with several models, modalities and tasks show that task arithmetic is a simple, efficient and effective way of editing models.
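The construction of a task vector is simple enough to state in a few lines; the following is a minimal sketch in PyTorch. The tiny model and the scaling coefficient are placeholders for illustration, not the paper's code.

```python
# Minimal sketch of task arithmetic on state dicts. `pretrained` and
# `finetuned` stand in for two checkpoints of the same architecture;
# `alpha` is a scaling coefficient chosen on a validation set.
import torch

def task_vector(pretrained: dict, finetuned: dict) -> dict:
    """tau = theta_finetuned - theta_pretrained, parameter by parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained: dict, vectors: list, alpha: float = 1.0) -> dict:
    """Add (or, with alpha < 0, negate) a sum of task vectors to the base model."""
    out = {k: v.clone() for k, v in pretrained.items()}
    for vec in vectors:
        for k in out:
            out[k] += alpha * vec[k]
    return out

# toy usage with a tiny model standing in for a pre-trained network
base = torch.nn.Linear(4, 2)
tuned = torch.nn.Linear(4, 2)
tau = task_vector(base.state_dict(), tuned.state_dict())
edited = apply_task_vectors(base.state_dict(), [tau], alpha=-1.0)  # "forget" the task
base.load_state_dict(edited)
```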
The existence of metallic implants in projection images for cone-beam computed tomography (CBCT) introduces undesired artifacts which degrade the quality of reconstructed images. In order to reduce metal artifacts, projection inpainting is an essential step in many metal artifact reduction algorithms. In this work, a hybrid network combining the shift window (Swin) vision transformer (ViT) and a convolutional neural network is proposed as a baseline network for the inpainting task. To incorporate metal information into the Swin ViT-based encoder, metal-conscious self-embedding and neighborhood-embedding methods are investigated. Both methods improve the performance of the baseline network. Furthermore, by choosing an appropriate window size, the model with neighborhood embedding achieves the lowest mean absolute error of 0.079 in metal regions and the highest peak signal-to-noise ratio of 42.346 in CBCT projections. Finally, the efficiency of metal-conscious embedding is demonstrated on both simulated and real cadaver CBCT data, where it enhances the inpainting capability of the baseline network.
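The abstract reports mean absolute error inside metal regions and PSNR over projections; a small sketch of how these two metrics might be computed is given below. The array shapes and the data range used for PSNR are assumptions, and the toy data is random rather than a CBCT projection.

```python
# Hypothetical evaluation sketch: MAE restricted to a metal mask and
# PSNR over a whole projection.
import numpy as np

def metal_mae(pred: np.ndarray, target: np.ndarray, metal_mask: np.ndarray) -> float:
    """Mean absolute error computed only where the metal mask is True."""
    return float(np.abs(pred - target)[metal_mask].mean())

def psnr(pred: np.ndarray, target: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for projections scaled to [0, data_range]."""
    mse = float(np.mean((pred - target) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
target = rng.random((256, 256))                            # toy projection
pred = target + 0.01 * rng.standard_normal((256, 256))     # toy inpainting result
mask = np.zeros_like(target, dtype=bool)
mask[100:140, 80:160] = True                               # toy metal region

print("MAE in metal region:", metal_mae(pred, target, mask))
print("PSNR:", psnr(pred, target))
```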
Mechanistic cardiac electrophysiology models allow for personalized simulations of the electrical activity in the heart and the ensuing electrocardiogram (ECG) on the body surface. As such, synthetic signals possess known ground truth labels of the underlying disease and can be employed for validation of machine learning ECG analysis tools in addition to clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical data or even replace them completely during training, leading to improved performance on real-world clinical test data. We thus generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations equally distributed into healthy control and 7 pathology classes. The pathological case of myocardial infarction comprised 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs for healthy and pathological subpopulations with high fidelity. The ECG database is split into training, validation, and test folds for development and objective assessment of novel machine learning algorithms.
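As a toy illustration of the kind of feature comparison mentioned (extracted features of synthetic versus clinical signals), the sketch below estimates heart rate from R-peaks with SciPy. The waveform, thresholds, and sampling rate are invented for the example and are not data from the described database.

```python
# Toy sketch: estimate heart rate from R-peaks of a single ECG lead.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(signal: np.ndarray, fs: float) -> float:
    # R-peaks: prominent spikes at least 0.4 s apart
    peaks, _ = find_peaks(signal, height=0.5, distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                  # RR intervals in seconds
    return 60.0 / rr.mean()

fs = 500.0                                    # sampling rate in Hz
ecg = 0.05 * np.random.default_rng(0).standard_normal(int(10 * fs))  # noise floor
ecg[::int(0.8 * fs)] += 1.0                   # crude R-peaks every 0.8 s (75 bpm)

print("estimated heart rate:", round(heart_rate_bpm(ecg, fs), 1), "bpm")
```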
Describing dynamic medical systems with machine learning is a challenging topic with a wide range of applications. In this work, we describe the possibility of modelling the blood glucose level of diabetic patients purely from measurement data. A combination of the influencing variables insulin and calories is used to find an interpretable model. The absorption speed of external substances in the human body depends strongly on external influences, which is why time-shifts are added to these inputs. The focus lies on determining optimal time-shifts that yield robust models with good prediction accuracy, independent of other unknown external influences. The modelling is based purely on measurement data using sparse identification of nonlinear dynamics. A differential equation is identified which, starting from an initial value, simulates the blood glucose dynamics. By applying the best model to test data, we show that the differential equation can simulate blood glucose dynamics over long periods with few influencing variables.
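A minimal sketch of the sparse-identification workflow described, using the pysindy package, is given below. The toy dynamics, the chosen delay, and the library and optimizer settings are assumptions for illustration, not the study's actual model.

```python
# Hypothetical SINDy-with-control sketch: identify a differential equation
# for a state x (blood glucose stand-in) driven by two time-shifted inputs
# (insulin, calories).
import numpy as np
import pysindy as ps

dt = 1.0                                      # minutes
t = np.arange(0, 600, dt)
insulin = np.exp(-((t - 100) / 30) ** 2)      # toy bolus profile
calories = np.exp(-((t - 90) / 40) ** 2)      # toy meal profile

shift = 15                                    # assumed absorption delay in samples
x = np.zeros_like(t)
x[0] = 1.0
for k in range(1, t.size):                    # toy ground-truth dynamics with delay
    dx = -0.02 * x[k - 1]
    if k - 1 - shift >= 0:                    # inputs act only after the delay
        dx += 0.05 * calories[k - 1 - shift] - 0.08 * insulin[k - 1 - shift]
    x[k] = x[k - 1] + dt * dx

# time-shift the inputs so the regression sees their delayed effect directly
u = np.column_stack([insulin, calories])[:-shift]
x_aligned = x[shift:]

model = ps.SINDy(
    feature_library=ps.PolynomialLibrary(degree=1),
    optimizer=ps.STLSQ(threshold=0.005),
)
model.fit(x_aligned.reshape(-1, 1), t=dt, u=u)
model.print()   # sparse ODE of the form x' = a*x + b*insulin + c*calories
```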
Real-time tracking of human body motion is crucial for interactive and immersive experiences in AR/VR. However, very little sensor data about the body is available from standalone wearable devices such as HMDs (head-mounted devices) or AR glasses. In this work, we present a reinforcement learning framework that takes sparse signals from an HMD and two controllers and simulates plausible and physically valid full-body motions. Using high-quality full-body motion as dense supervision during training, a simple policy network can learn to output appropriate torques for the character to balance, walk, and jog while closely following the input signals. Our results show surprisingly similar leg motion to the ground truth without any observation of the lower body, even when the input is only the 6D transformations of the HMD. We also show that a single policy is robust to diverse locomotion styles, different body sizes, and novel environments.
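To make the setup concrete, here is a minimal, hypothetical sketch of a policy network of the kind described: sparse 6D transforms from an HMD and two controllers in, joint torques out. The input and output sizes, the added proprioceptive state, and the use of PyTorch are assumptions.

```python
# Hypothetical sketch of the sparse-input policy: three tracked devices
# (HMD + two controllers), each contributing a 6D transform, mapped to
# joint torques for a simulated character. Sizes are illustrative only.
import torch
import torch.nn as nn

N_DEVICES = 3          # HMD and two hand controllers
OBS_PER_DEVICE = 6     # 6D transform (3D position + 3D rotation) per device
N_JOINTS = 33          # assumed actuated joints of the simulated character

policy = nn.Sequential(
    nn.Linear(N_DEVICES * OBS_PER_DEVICE + N_JOINTS * 2, 256),  # + character state
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_JOINTS),   # one torque per actuated joint
)

# one control step: concatenate device signals with proprioceptive state
device_signal = torch.randn(1, N_DEVICES * OBS_PER_DEVICE)
joint_state = torch.randn(1, N_JOINTS * 2)          # joint positions and velocities
torques = policy(torch.cat([device_signal, joint_state], dim=-1))
print(torques.shape)                                 # torch.Size([1, 33])
```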
IceCube, a cubic-kilometre array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is challenging because of the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false-positive rate (FPR) compared with current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared with current maximum-likelihood techniques. When run on a GPU, the GNN can process IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
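A small, hypothetical sketch of the event representation described: each triggered optical sensor becomes a node with spatial position, time, and charge, and a k-nearest-neighbour graph connects them for a GNN. The feature choices and k are assumptions, and the sketch uses PyTorch Geometric (its knn_graph helper requires the torch-cluster extension) rather than IceCube's own software.

```python
# Hypothetical sketch: turn an IceCube-like event (a set of sensor hits)
# into a point-cloud graph and apply one message-passing layer.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph, GCNConv

n_hits = 40
pos = torch.randn(n_hits, 3)                    # sensor x, y, z (toy values)
time = torch.rand(n_hits, 1)                    # hit time
charge = torch.rand(n_hits, 1)                  # collected charge
x = torch.cat([pos, time, charge], dim=-1)      # 5 features per node

edge_index = knn_graph(pos, k=8)                # connect each hit to 8 neighbours
event = Data(x=x, edge_index=edge_index, pos=pos)

conv = GCNConv(in_channels=5, out_channels=16)  # one graph-convolution layer
h = conv(event.x, event.edge_index)
print(h.shape)                                  # torch.Size([40, 16])
```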
Machine learning, and reinforcement learning (RL) in particular, has been extremely successful in helping us understand neural decision-making processes. However, the role of RL in understanding other neural processes, especially motor learning, has been explored much less. To explore this connection, we investigate how recent deep RL methods correspond to the dominant motor-learning framework in neuroscience, error-based learning. Error-based learning can be probed with a mirror-reversal adaptation paradigm, where it produces distinctive qualitative predictions that are observed in humans. We therefore tested three major families of modern deep RL algorithms on mirror reversal. Surprisingly, all of the algorithms failed to mimic human behaviour and indeed displayed behaviour at odds with the predictions of error-based learning. To fill this gap, we introduce a novel deep RL algorithm: model-based deterministic policy gradient (MB-DPG). MB-DPG draws inspiration from error-based learning by explicitly relying on observed action outcomes. We show that MB-DPG captures (human) error-based learning under mirror-reversal and rotational perturbations. Next, we show that error-based learning in the form of MB-DPG converges faster than canonical model-free algorithms on complex arm-based reaching tasks, while being more robust to (forward) model errors than model-based RL. These findings highlight the gap between current deep RL methods and human motor adaptation and offer a route to closing this gap, facilitating future beneficial interaction between the two fields.
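The abstract describes MB-DPG only at a high level; the sketch below illustrates the general idea of a model-based deterministic policy gradient step, where the policy gradient is taken through a learned forward model of the action outcome. All network sizes, the losses, and the random stand-in data are placeholder assumptions, not the paper's algorithm.

```python
# Illustrative only: a deterministic policy improved by backpropagating a
# task error through a learned forward model of the action outcome, in the
# spirit of error-based learning.
import torch
import torch.nn as nn

obs_dim, act_dim, out_dim = 10, 3, 2      # e.g. arm state, torques, hand position

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
forward_model = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
                              nn.Linear(64, out_dim))

opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_model = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

state = torch.randn(32, obs_dim)
target = torch.randn(32, out_dim)          # desired outcome (e.g. reach target)
observed = torch.randn(32, out_dim)        # outcome actually observed in the world

# 1) fit the forward model on observed action outcomes
action = policy(state).detach()
pred = forward_model(torch.cat([state, action], dim=-1))
model_loss = nn.functional.mse_loss(pred, observed)
opt_model.zero_grad(); model_loss.backward(); opt_model.step()

# 2) improve the policy by differentiating the task error through the model
action = policy(state)
outcome = forward_model(torch.cat([state, action], dim=-1))
policy_loss = nn.functional.mse_loss(outcome, target)
opt_policy.zero_grad(); policy_loss.backward(); opt_policy.step()
```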